Session D-7

Edge Computing 1

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 5:30 AM — 7:00 AM PDT
Location
Babbio 210

Adversarial Group Linear Bandits and Its Application to Collaborative Edge Inference

Yin Huang, Letian Zhang and Jie Xu (University of Miami, USA)

The multi-armed bandit problem is a classical problem of sequential decision making under uncertainty, with applications in many fields including computer and communication networks. The majority of existing works study bandit problems in either the stochastic regime or the adversarial regime, but the intersection of these two regimes is much less investigated. In this paper, we study a new bandit problem, called adversarial group linear bandits (AGLB), in which reward generation is a joint outcome of both a stochastic process and adversarial behavior. In particular, the reward that the learner receives depends not only on the group and arm that the learner selects but also on the group-level attack decision made by the adversary. AGLB models many real-world problems, such as collaborative edge inference and multi-site online ad placement. To combat the uncertainty in the coupled stochastic and adversarial rewards, we develop a new bandit algorithm, called EXPUCB, which marries the classical LinUCB and EXP3 algorithms, and prove its sublinear regret. We apply EXPUCB to the collaborative edge inference problem to evaluate its performance. Extensive simulation results verify the superior learning ability of EXPUCB under coupled stochastic noises and adversarial attacks.
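The abstract builds on two classical bandit algorithms, LinUCB and EXP3. As a point of reference, here is a minimal sketch of the EXP3 building block for the adversarial regime (illustrative only; this is not the paper's EXPUCB algorithm, and the reward sequence below is hypothetical):

```python
import math
import random

def exp3(num_arms, rewards, gamma=0.1):
    """Minimal EXP3 sketch: exponential weights with importance-weighted
    reward estimates for adversarial bandits. rewards[t][i] is the
    (hypothetical) reward in [0, 1] of arm i at round t."""
    weights = [1.0] * num_arms
    total_reward = 0.0
    for round_rewards in rewards:
        w_sum = sum(weights)
        # Mix the weight distribution with uniform exploration.
        probs = [(1 - gamma) * w / w_sum + gamma / num_arms for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        reward = round_rewards[arm]
        total_reward += reward
        # Importance-weighted estimate keeps the update unbiased
        # even when rewards are chosen adversarially.
        estimate = reward / probs[arm]
        weights[arm] *= math.exp(gamma * estimate / num_arms)
    return total_reward

# Toy adversarial sequence: arm 1 is better in every round.
random.seed(0)
print(exp3(2, [[0.2, 0.8] for _ in range(500)]))
```

EXP3 keeps exponential weights over arms and mixes in uniform exploration; the importance-weighted estimate is what makes it robust to adversarially chosen rewards, which is why it pairs naturally with a stochastic method such as LinUCB in settings like AGLB.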
Speaker Yin Huang (University of Miami)

He is a Ph.D. candidate at the University of Miami, USA; his primary research interests are multi-armed bandits and edge computing.


Online Container Scheduling for Data-intensive Applications in Serverless Edge Computing

Xiaojun Shang, Yingling Mao and Yu Liu (Stony Brook University, USA); Yaodong Huang (Shenzhen University, China); Zhenhua Liu and Yuanyuan Yang (Stony Brook University, USA)

Introducing the emerging serverless paradigm into edge computing could avoid over- and under-provisioning of limited edge resources and make complex edge resource management transparent to application developers, which largely facilitates the cost-effectiveness, portability, and short time-to-market of edge applications. However, the computation/data dispersion and device/network heterogeneity of edge environments prevent current serverless computing platforms from adapting to the network edge. In this paper, we address these challenges by formulating a container placement and data flow routing problem that fully considers the heterogeneity of edge networks and the overhead of operating serverless platforms on resource-limited edge servers. We design an online algorithm to solve the problem, show that it achieves a local optimum for each arriving container, and prove its theoretical guarantee relative to the optimal offline solution. We also conduct extensive simulations based on practical experiment results to show the advantages of the proposed algorithm over existing baselines.
Speaker Xiaojun Shang (Stony Brook University)

Xiaojun Shang received his B.Eng. degree in Information Science and Electronic Engineering from Zhejiang University, Hangzhou, China, and his M.S. degree in Electronic Engineering from Columbia University, New York, USA. He is now pursuing his Ph.D. degree in Computer Engineering at Stony Brook University. His research interests lie in edge AI, serverless edge computing, online algorithm design, virtual network functions, and cloud computing. His current research focuses on enhancing processing and communication capabilities for data-intensive workflows through edge-cloud synergy, and on ensuring highly reliable, efficient, and environmentally friendly network services in edge-cloud environments.


Dynamic Edge-centric Resource Provisioning for Online and Offline Services Co-location

Tao Ouyang, Kongyange Zhao, Xiaoxi Zhang, Zhi Zhou and Xu Chen (Sun Yat-sen University, China)

In general, online services should be completed quickly in a fairly stable running environment to meet their tight latency constraints, while offline services can be processed in a looser manner thanks to their elastic soft deadlines. To coordinate such services well at a resource-limited edge cluster, in this paper we study an edge-centric resource provisioning optimization for online and offline service co-location, where the proxy seeks to maximize timely online service performance (e.g., completion rate) while maintaining satisfactory long-term offline service performance (e.g., average throughput). However, tricky hybrid temporal couplings among provisioning decisions arise due to the heterogeneous constraints of the co-located services and their different time-scale performance metrics. We hence first propose a reactive provisioning approach that requires no prior knowledge of future system dynamics and leverages Lagrange relaxation to devise a constraint-aware stochastic subgradient algorithm that deals with the challenge of hybrid couplings. To further boost performance by integrating powerful machine learning techniques, we also advocate a predictive provisioning approach, in which future request arrivals can be estimated accurately within a limited prediction window. With rigorous theoretical analysis and extensive trace-driven evaluations, we demonstrate the superior performance of our proposed algorithms.
Speaker Tao Ouyang (Sun Yat-sen University)

Tao Ouyang received the BS degree from the School of Information Science and Technology, University of International Relations, Beijing, China in 2017 and ME degree in 2019 from the School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China, where he is currently working toward the PhD degree with the School of Computer Science and Engineering. His research interests include mobile edge computing, online learning, and optimization.


TapFinger: Task Placement and Fine-Grained Resource Allocation for Edge Machine Learning

Yihong Li (Sun Yat-sen University, China); Tianyu Zeng (Sun Yat-Sen University, China); Xiaoxi Zhang (Sun Yat-sen University, China); Jingpu Duan (Peng Cheng Laboratory, China); Chuan Wu (The University of Hong Kong, Hong Kong)

Machine learning (ML) tasks are one of the major workloads in today's edge computing networks. Existing edge-cloud schedulers allocate the requested amounts of resources to each task, falling short of best utilizing the limited edge resources flexibly for ML task performance optimization. This paper proposes TapFinger, a distributed scheduler that minimizes the total completion time of ML tasks in a multi-cluster edge network through co-optimizing task placement and fine-grained multi-resource allocation. To learn the tasks' uncertain resource sensitivity and enable distributed online scheduling, we adopt multi-agent reinforcement learning (MARL) and propose several techniques to make it efficient for our ML-task resource allocation. First, TapFinger uses a heterogeneous graph attention network as the MARL backbone to abstract inter-related state features into more learnable environmental patterns. Second, the actor network is augmented through a tailored task selection phase, which decomposes the actions and encodes the optimization constraints. Third, to mitigate decision conflicts among agents, we combine Bayes' theorem and masking schemes in a novel way to facilitate our MARL model training. Extensive experiments using synthetic and test-bed ML task traces show that TapFinger can achieve up to a 28.6% reduction in task completion time and improve resource efficiency compared to state-of-the-art resource schedulers.
Speaker Yihong Li (Sun Yat-sen University)

Yihong Li received his bachelor’s degree from the School of Information Management, Sun Yat-sen University in 2021. He is currently pursuing a master’s degree with the School of Computer Science and Engineering, Sun Yat-sen University. His research interests include machine learning systems and networking.


Session Chair

Xiaonan Zhang

Session D-8

Edge Computing 2

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 7:30 AM — 9:00 AM PDT
Location
Babbio 210

Dynamic Regret of Randomized Online Service Caching in Edge Computing

Siqi Fan and I-Hong Hou (Texas A&M University, USA); Van Sy Mai (National Institute of Standards and Technology, USA)

This paper studies an online service caching problem, where an edge server, equipped with a prediction window of future service request arrivals, needs to decide which services to host locally subject to limited storage capacity. The edge server aims to minimize the sum of a request forwarding cost (i.e., the cost of forwarding requests to remote data centers to process) and a service instantiating cost (i.e., that of retrieving and setting up a service). Considering request patterns are usually non-stationary in practice, the performance of the edge server is measured by dynamic regret, which compares the total cost with that of the dynamic optimal offline solution. To solve the problem, we propose a randomized online algorithm with low complexity and theoretically derive an upper bound on its expected dynamic regret. Simulation results show that our algorithm significantly outperforms other state-of-the-art policies in terms of the runtime and expected total cost.
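The abstract's objective sums a request forwarding cost and a service instantiating cost. A toy cost evaluator for a simple deterministic cache-on-miss policy (not the paper's randomized algorithm; all names and values here are hypothetical) makes the trade-off concrete:

```python
from collections import deque

def total_cost(requests, capacity, fwd_cost, inst_cost):
    """Cost of a naive cache-on-miss policy with FIFO eviction.
    A miss pays fwd_cost (the request is forwarded to the remote data
    center) plus inst_cost (the service is retrieved and set up locally)."""
    cache = deque()
    cost = 0.0
    for s in requests:
        if s in cache:
            continue  # served locally: no cost in this toy model
        cost += fwd_cost   # forward this request to the data center
        cost += inst_cost  # then instantiate the service at the edge
        if len(cache) == capacity:
            cache.popleft()
        cache.append(s)
    return cost

# Ten requests alternating between two services, cache of size 1:
# every request misses, so the cost is 10 * (1.0 + 2.0).
print(total_cost([0, 1] * 5, capacity=1, fwd_cost=1.0, inst_cost=2.0))  # → 30.0
```

The alternating pattern is exactly where deterministic caching performs worst; a prediction window over future arrivals, as assumed in the paper, lets the server avoid instantiating services it will evict before reuse.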
Speaker Siqi Fan (Texas A&M University)

My name is Siqi Fan, and I am a PhD candidate at Texas A&M University. My research focuses on Machine Learning, Online Optimization, and Edge Networking. 


SEM-O-RAN: Semantic and Flexible O-RAN Slicing for NextG Edge-Assisted Mobile Systems

Corrado Puligheddu (Politecnico di Torino, Italy); Jonathan Ashdown (United States Air Force, USA); Carla Fabiana Chiasserini (Politecnico di Torino & CNIT, IEIIT-CNR, Italy); Francesco Restuccia (Northeastern University, USA)

5G and beyond cellular networks (NextG) will support the continuous offloading of resource-expensive edge-assisted deep learning (DL) tasks. To this end, RAN (Radio Access Network) resources will need to be carefully "sliced" to satisfy heterogeneous application requirements while minimizing RAN usage. Existing slicing frameworks treat each DL task as equal and inflexibly define the required resources, which leads to sub-optimal performance. This work proposes SEM-O-RAN, the first semantic and flexible slicing framework for NextG Open RANs. Our key intuition is that different DL classifiers tolerate different levels of image compression, due to the semantic nature of the target classes. Therefore, compression can be semantically applied so that the networking load can be minimized. Moreover, flexibility allows SEM-O-RAN to consider multiple resource allocations leading to the same task-related performance, which allows for significantly more allocated tasks. First, we mathematically formulate the Semantic Flexible Edge Slicing Problem, demonstrate that it is NP-hard, and provide an approximation algorithm to solve it efficiently. Then, we evaluate the performance of SEM-O-RAN through extensive numerical analysis with state-of-the-art DL models, as well as real-world experiments on the Colosseum testbed. Our results show that SEM-O-RAN allocates up to 169% more tasks than the state of the art.
Speaker Corrado Puligheddu (Polytechnic University of Turin)

Corrado Puligheddu is an Assistant Professor at Politecnico di Torino, Turin, Italy, where he obtained his Ph.D. in Electrical, Electronics, and Communication Engineering in 2022. His research interests include 5G networks, Open RAN and Machine Learning.


Joint Task Offloading and Resource Allocation in Heterogeneous Edge Environments

Yu Liu, Yingling Mao, Zhenhua Liu, Fan Ye and Yuanyuan Yang (Stony Brook University, USA)

Mobile edge computing is becoming one of the ubiquitous computing paradigms to support applications requiring low latency and high computing capability. FPGA-based reconfigurable accelerators have high energy efficiency and low latency compared to general-purpose servers. Therefore, it is natural to incorporate reconfigurable accelerators into mobile edge computing systems. This paper formulates and studies the problem of joint task offloading, access point selection, and resource allocation in heterogeneous edge environments for latency minimization. Due to the heterogeneity of edge computing devices and the coupling between offloading, access point selection, and resource allocation decisions, it is challenging to optimize over them simultaneously. We decompose the problem into two disjoint subproblems and develop algorithms for them. The first subproblem, jointly determining offloading and computing resource allocation decisions, admits no polynomial-time approximation algorithm; for it we develop an algorithm based on semidefinite relaxation. For the second subproblem, jointly determining access point selection and communication resource allocation decisions, we propose an algorithm with a provable approximation ratio of 2.62. Extensive numerical simulations show that the proposed algorithms outperform baselines and are near-optimal over a wide range of settings.
Speaker Yu Liu

Yu Liu received his B. Eng. degree in Telecommunication Engineering from Xidian University, Xi'an, China. He is now pursuing his Ph.D. degree in Computer Engineering at Stony Brook University. His research interests are in online algorithms and edge computing, with a focus on the placement and resource management of virtual network functions and the reliability of service function chains.


Latency-Optimal Pyramid-based Joint Communication and Computation Scheduling for Distributed Edge Computing

Quan Chen and Kaijia Wang (Guangdong University of Technology, China); Song Guo (The Hong Kong Polytechnic University, Hong Kong); Tuo Shi (Tianjin University, China); Jing Li (The Hong Kong Polytechnic University, Hong Kong); Zhipeng Cai (Georgia State University, USA); Albert Zomaya (The University of Sydney, Australia)

By combining edge computing and parallel computing, distributed edge computing has emerged as a new paradigm to accelerate computation at the edge. Considering the parallelism of both computation and communication, the problem of Minimum Latency joint Communication and Computation Scheduling (MLCCS) has recently been studied. However, existing works make the rigid assumptions that the communication time of each device is fixed and that the workload can be split arbitrarily small. To make the work more practical and general, this paper studies the MLCCS problem without the above assumptions. First, the MLCCS problem under a general model is formulated and proved to be NP-hard. Second, a pyramid-based computing model is proposed that jointly considers the parallelism of communication and computation and has an approximation ratio of 1 + δ, where δ is related to the devices' communication rates. An interesting property of this computing model is identified and proved: when all devices have the same communication rate, the optimal latency can be obtained under an arbitrary scheduling order. Additionally, when the workload cannot be split arbitrarily, an approximation algorithm with a ratio of at most 2(1 + δ) is proposed. Finally, both simulation results and testbed experiments verify the high performance of the proposed methods.
Speaker Quan Chen (Guangdong University of Technology)

Quan Chen received his BS, Master, and PhD degrees from the School of Computer Science and Technology at Harbin Institute of Technology, China. He is currently an associate professor in the School of Computers at Guangdong University of Technology. In the past, he worked as a postdoctoral research fellow in the Department of Computer Science at Georgia State University. His research interests include wireless communication, networking, and distributed edge computing.


Session Chair

György Dán

Session D-9

Cross-technology Communications

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM PDT
Location
Babbio 210

Enabling Direct Message Dissemination in Industrial Wireless Networks via Cross-Technology Communication

Di Mu, Yitian Chen, Xingjian Chen and Junyang Shi (State University of New York at Binghamton, USA); Mo Sha (Florida International University, USA)

IEEE 802.15.4-based industrial wireless networks have been widely deployed to connect sensors, actuators, and gateways in industrial facilities. Although wireless mesh networks work satisfactorily most of the time thanks to years of research, they are often complex and difficult to manage once deployed. Moreover, the delivery of time-critical messages suffers from long delays, because all messages have to go through hop-by-hop transport. Recent studies show that adding a low-power wide-area network (LPWAN) radio to each device in the network can effectively overcome such limitations, because network management and time-critical messages can be transmitted from the gateway to field devices directly through long-distance LPWAN links. However, industry practitioners have shown a marked reluctance to embrace this solution because of the high cost of hardware modification. This paper presents a novel system, named DIrect MEssage dissemination (DIME), that leverages the cross-technology communication technique to enable direct message dissemination from the gateway to field devices in industrial wireless networks without the need to add a second radio to each field device. Experimental results show that our system effectively reduces the latency of delivering time-critical messages and improves network reliability compared to a state-of-the-art baseline.
Speaker Mo Sha (Florida International University)

Dr. Mo Sha is an Associate Professor in the Knight Foundation School of Computing and Information Sciences at Florida International University (FIU). Before joining FIU, he was an Assistant Professor in Computer Science at the State University of New York at Binghamton. His research interests include wireless networking, Internet of Things, network security, and cyber-physical systems. He received the NSF CAREER award in 2021 and CRII award in 2017, published more than 50 research papers, served on the technical program committees of 20 premier conferences, and reviewed papers for 21 journals.


Breaking the Throughput Limit of LED-Camera Communication via Superposed Polarization

Xiang Zou (Xi'an Jiaotong University, China); Jianwei Liu (Zhejiang University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China)

With the popularity of LED infrastructure and smartphone cameras, LED-Camera visible light communication (VLC) has become a realistic and promising technology. However, existing LED-Camera VLC has limited throughput due to the sampling manner of the camera. In this paper, by introducing a polarization dimension, we propose a hybrid modulation scheme with LED and polarization signals to boost throughput. Nevertheless, directly mixing LED and polarized signals may suffer from channel conflict. We exploit a well-designed packet structure and Symmetric Return-to-Zero Inverted (SRZI) coding to overcome the conflict. In addition, when demodulating the hybrid signal, we alleviate the noise of polarization on the LED signals through polarization background subtraction. We further propose a pixel-free approach that corrects the perspective distortion caused by shifts of the viewing angle by adding polarizers around the liquid crystal array. We build a prototype of this hybrid modulation scheme using off-the-shelf optical components. Extensive experimental results demonstrate that the hybrid modulation scheme achieves reliable communication with a throughput of 13.4 kbps, which is 400% of the existing state-of-the-art LED-Camera VLC.
Speaker Xiang Zou (Xi'an Jiaotong University)

My name is Xiang Zou, and I am a PhD candidate at Xi'an Jiaotong University. I have been an exchange student at Zhejiang University since 2019. I am interested in VLC, RFID, and WiFi sensing.


Parallel Cross-technology Transmission from IEEE 802.11ax to Heterogeneous IoT Devices

Dan Xia, Xiaolong Zheng, Liang Liu and Huadong Ma (Beijing University of Posts and Telecommunications, China)

Cross-Technology Communication (CTC) enables direct interconnection among incompatible wireless technologies. However, for the downlink from WiFi to multiple IoT technologies, serial CTC transmission has extremely low spectrum efficiency. Recent parallel CTC uses IEEE 802.11g to send emulated ZigBee signals and lets a BLE receiver decode its data from the emulated ZigBee signals with a dedicated codebook. It still has low spectrum efficiency because 802.11g exclusively occupies the whole channel. Besides, the codebook hinders reception on commodity BLE devices. In this paper, we propose WiTx, a parallel CTC that uses IEEE 802.11ax to emulate a composite signal receivable by commodity BLE, ZigBee, and LoRa devices. Thanks to OFDMA, WiTx uses a single Resource Unit (RU) for parallel CTC and leaves the other RUs free for high-rate WiFi users. But such a sophisticated composite signal is easily distorted by emulation imperfections, dynamic channel noise, the cyclic prefix, and center frequency offset. We propose a CTC link model that jointly models emulation and channel distortions. We then carve the emulated signal with elaborate compensations in both the time and frequency domains. We implement a prototype of WiTx on USRP and commodity devices. Experiments demonstrate that WiTx achieves efficient parallel transmission with a goodput of 390.24 kbps.
Speaker Dan Xia (Beijing University of Posts and Telecommunications)

I'm Dan Xia, a fourth-year Ph.D. student in the School of Computer Science, Beijing University of Posts and Telecommunications.


LigBee: Symbol-Level Cross-Technology Communication from LoRa to ZigBee

Zhe Wang and Linghe Kong (Shanghai Jiao Tong University, China); Longfei Shangguan (Microsoft Cloud&AI, USA); Liang He (University of Colorado Denver, USA); Kangjie Xu (Shanghai Jiao Tong University, China); Yifeng Cao (Georgia Institute of Technology, USA); Hui Yu (Shanghai Jiao Tong University, China); Qiao Xiang (Xiamen University, USA); Jiadi Yu (Shanghai Jiao Tong University, China); Teng Ma (Alibaba Group, China); Zhuo Song (Alibaba Cloud & Shanghai Jiao Tong University, China); Zheng Liu (Alibaba Group & Zhejiang University, China); Guihai Chen (Shanghai Jiao Tong University, China)

Low-power wide-area networks (LPWAN) evolve rapidly, with advanced communication primitives (e.g., coding, modulation) being continuously invented. This rapid iteration on LPWAN, however, forms a communication barrier between legacy wireless sensor nodes deployed years ago (e.g., ZigBee-based sensor nodes) and their latest competitors running a different communication protocol (e.g., LoRa-based IoT nodes): they work on the same frequency band but follow different MAC- and PHY-layer regulations and thus cannot talk to each other directly. To break this barrier, we propose LigBee, a cross-technology communication (CTC) solution that enables symbol-level communication from the latest LPWAN LoRa node to a legacy ZigBee node. We have implemented LigBee on both software-defined radios and commercial off-the-shelf (COTS) LoRa and ZigBee nodes, and demonstrated that LigBee builds a reliable CTC link from the LoRa node to the ZigBee node on both platforms. Our experimental results show that (i) LigBee achieves a bit error rate (BER) on the order of 10^-3 with a 70-80% frame reception ratio (FRR), (ii) the range of the LigBee link is over 300 m, which is 6-7.5x the typical range of legacy ZigBee and the state-of-the-art solution, and (iii) the throughput of the LigBee link is maintained on the order of kbps, which is close to LoRa's throughput.
Speaker Yifeng Cao (Georgia Institute of Technology)

Yifeng Cao is currently a 4th year Ph.D. student at Georgia Institute of Technology. His research interest includes localization and ultra-wideband radio (UWB) based sensing. He is also interested in mobile computing and autonomous driving. Yifeng is now actively looking for a job in the industry. If you are interested in his research work, he is open to any discussion through email.


Session Chair

Zhangyu Guan

Session D-10

Edge Computing 3

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 12:30 PM — 2:00 PM PDT
Location
Babbio 210

Prophet: An Efficient Feature Indexing Mechanism for Similarity Data Sharing at Network Edge

Yuchen Sun, Deke Guo, Lailong Luo, Li Liu, Xinyi Li and Junjie Xie (National University of Defense Technology, China)

As an emerging infrastructure, edge storage systems have attracted many efforts to efficiently distribute and share data among edge servers. However, meeting the increasing demand for similarity data sharing remains an open problem. The intrinsic reason is that existing solutions can only return an exact data match for a query, while more general edge applications require data similar to a query input from any server. To fill this gap, this paper pioneers a new paradigm to support high-dimensional similarity search at network edges. Specifically, we propose Prophet, the first known architecture for similarity data indexing. We first divide the feature space of the data into many subareas, then project both subareas and edge servers into a virtual plane in which the distance between any two points reflects not only data similarity but also network latency. When an edge server submits a request for a data insert, delete, or query, it computes the data feature and the virtual coordinates, then iteratively forwards the request through greedy routing based on the forwarding tables and the virtual coordinates. With Prophet, similar high-dimensional features are stored by a common server or several nearby servers.
Speaker Yuchen Sun (National University of Defense Technology)

Yuchen Sun received a B.S. in Telecommunication Engineering from the Huazhong University of Science and Technology, Wuhan, China, in 2018. He has been with the School of System Engineering, National University of Defense Technology, Changsha, where he is currently a PhD candidate. His research interests include Trustworthy Artificial Intelligence, Edge Computing and Wireless Indoor Localization.


DeepFT: Fault-Tolerant Edge Computing using a Self-Supervised Deep Surrogate Model

Shreshth Tuli and Giuliano Casale (Imperial College London, United Kingdom (Great Britain)); Ludmila Cherkasova (ARM Research, USA); Nicholas Jennings (Loughborough University, United Kingdom (Great Britain))

The emergence of latency-critical AI applications has been supported by the evolution of the edge computing paradigm. However, edge solutions are typically resource-constrained, posing reliability challenges due to heightened contention for compute capacities and faulty application behavior in the presence of overload conditions. Although a large amount of generated log data can be mined for fault prediction, labeling this data for training is a manual process and thus a limiting factor for automation. Due to this, many companies resort to unsupervised fault-tolerance models. Yet, failure models of this kind can incur a loss of accuracy when they need to adapt to non-stationary workloads and diverse host characteristics. Thus, we propose a novel modeling approach, DeepFT, to proactively avoid system overloads and their adverse effects by optimizing the task scheduling decisions. DeepFT uses a deep-surrogate model to accurately predict and diagnose faults in the system and co-simulation based self-supervised learning to dynamically adapt the model in volatile settings. Experimentation on an edge cluster shows that DeepFT can outperform state-of-the-art methods in fault-detection and QoS metrics. Specifically, DeepFT gives the highest F1 scores for fault-detection, reducing service deadline violations by up to 37% while also improving response time by up to 9%.
Speaker Shreshth Tuli

Shreshth Tuli is a President's Ph.D. Scholar at the Department of Computing, Imperial College London, UK. Prior to this he was an undergraduate student at the Department of Computer Science and Engineering at Indian Institute of Technology - Delhi, India. He has worked as a visiting research fellow at the CLOUDS Laboratory, School of Computing and Information Systems, the University of Melbourne, Australia. He is a national level Kishore Vaigyanik Protsahan Yojana (KVPY) scholarship holder from the Government of India for excellence in science and innovation. His research interests include Internet of Things (IoT), Fog Computing and Deep Learning.


Marginal Value-Based Edge Resource Pricing and Allocation for Deadline-Sensitive Tasks

Puwei Wang and Zhouxing Sun (Renmin University of China, China); Ying Zhan (Guizhou University of Finance and Economics, China); Haoran Li and Xiaoyong Du (Renmin University of China, China)

In edge computing (EC), resource allocation assigns the computing, storage, and networking resources of edge nodes (ENs) to tasks generated by users efficiently and reasonably. Due to the limited resources of ENs, tasks often need to compete for resources. Pricing mechanisms are widely used to address the resource allocation problem, and the valuations of tasks play a critical role in them. However, users are naturally unwilling to expose the valuations of their tasks. In this paper, we introduce the marginal value to estimate the valuations of tasks and propose a marginal value-based pricing mechanism grounded in incentive theory, which incentivizes tasks with higher marginal values to actively request more resources. The EC platform first sets the resource prices, and then the users determine their resource requests based on the prices and the valuations of their tasks. After receiving the deadline-sensitive tasks from the users, the resource allocation can be modeled as a 0-1 knapsack problem with the deadline constraints of the tasks. Extensive experimental results demonstrate that our approach is computationally efficient and promising in enhancing the utility of the EC platform and the tasks.
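The abstract models the final allocation step as a 0-1 knapsack problem. As a point of reference, here is the textbook dynamic-programming solution to plain 0-1 knapsack (without the paper's deadline constraints; the task data below are hypothetical):

```python
def knapsack_01(capacity, tasks):
    """Textbook 0-1 knapsack DP: dp[c] is the best total value achievable
    with capacity at most c. Each task is a (demand, value) pair -- here,
    a task's resource demand and its estimated (marginal-value) valuation."""
    dp = [0] * (capacity + 1)
    for demand, value in tasks:
        # Iterate capacity downwards so each task is selected at most once.
        for c in range(capacity, demand - 1, -1):
            dp[c] = max(dp[c], dp[c - demand] + value)
    return dp[capacity]

# Three hypothetical tasks competing for 5 resource units:
# picking the first two (demands 2 + 3) yields the best value 3 + 4 = 7.
print(knapsack_01(5, [(2, 3), (3, 4), (4, 6)]))  # → 7
```

The DP runs in O(capacity × tasks) time; adding per-task deadlines, as the paper does, constrains which subsets are feasible and requires a more elaborate formulation.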
Speaker Puwei Wang

Puwei Wang is an associate professor in the School of Information, Renmin University of China. His current research is on blockchain, service computing, and edge computing.


Digital Twin-Enabled Service Satisfaction Enhancement in Edge Computing

Jing Li (The Hong Kong Polytechnic University, Hong Kong); Jianping Wang (City University of Hong Kong, Hong Kong); Quan Chen (Guangdong University of Technology, China); Yuchen Li (The Australian National University, Australia); Albert Zomaya (The University of Sydney, Australia)

The emerging digital twin technique enhances network management efficiency and provides comprehensive insights by mapping physical objects to their digital twins. User satisfaction with digital twin-enabled query services relies on the freshness of digital twin data, which is measured by the Age of Information (AoI). Mobile Edge Computing (MEC), as a promising technology, offers real-time data communication between physical objects and their digital twins at the edge of the core network. However, the mobility of physical objects and dynamic query arrivals make efficient service provisioning in MEC networks challenging. In this paper, we investigate dynamic digital twin placement for augmenting user service satisfaction in MEC. We focus on two user service satisfaction augmentation problems in an MEC network, i.e., the static and dynamic utility maximization problems. For the static utility maximization problem, we present an Integer Linear Programming (ILP) formulation and a performance-guaranteed approximation algorithm. We then devise an online algorithm with a provable competitive ratio for the dynamic utility maximization problem. Finally, we evaluate the performance of the proposed algorithms through experimental simulations. Simulation results demonstrate that the proposed algorithms are promising.
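Age of Information (AoI), the freshness measure the abstract relies on, is simply the elapsed time since the generation of the freshest received update. A toy integer-time computation (hypothetical update times, assuming an initial update at time 0 and instantaneous delivery):

```python
def average_aoi(update_times, horizon):
    """Average Age of Information over 1..horizon: at each time t, the AoI
    is t minus the generation time of the freshest update received so far."""
    aoi_sum = 0
    last = 0  # assume an update at time 0
    updates = iter(sorted(update_times))
    next_up = next(updates, None)
    for t in range(1, horizon + 1):
        # Absorb every update generated by time t.
        while next_up is not None and next_up <= t:
            last = next_up
            next_up = next(updates, None)
        aoi_sum += t - last
    return aoi_sum / horizon

# Updates at times 2, 5, and 9 over a horizon of 10 time slots.
print(average_aoi([2, 5, 9], horizon=10))  # → 1.1
```

Between updates the AoI grows linearly, and each received update resets it; placing digital twins closer to mobile objects, as in the paper, keeps these resets frequent and the AoI low.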
Speaker Jing Li (The Hong Kong Polytechnic University)

Jing Li received the PhD degree and the BSc degree with the first class Honours from The Australian National University. He is currently a postdoctoral fellow at The Hong Kong Polytechnic University. His research interests include edge computing, internet of things, digital twin, network function virtualization, and combinatorial optimization.


Session Chair

Hao Wang

